Convolutional Neural Networks

Project: Write an Algorithm for a Dog Identification App


In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with '(IMPLEMENTATION)' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!

Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the Jupyter Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.

The rubric contains optional "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. If you decide to pursue the "Stand Out Suggestions", you should include the code in this Jupyter notebook.


Why We're Here

In this notebook, you will make the first steps towards developing an algorithm that could be used as part of a mobile or web app. At the end of this project, your code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog's breed. If a human is detected, it will provide an estimate of the dog breed that is most resembling. The image below displays potential sample output of your finished project (... but we expect that each student's algorithm will behave differently!).

Sample Dog Output

In this real-world setting, you will need to piece together a series of models to perform different tasks; for instance, the algorithm that detects humans in an image will be different from the CNN that infers dog breed. There are many points of possible failure, and no perfect algorithm exists. Your imperfect solution will nonetheless create a fun user experience!

The Road Ahead

We break the notebook into separate steps. Feel free to use the links below to navigate the notebook.

  • Step 0: Import Datasets
  • Step 1: Detect Humans
  • Step 2: Detect Dogs
  • Step 3: Create a CNN to Classify Dog Breeds (from Scratch)
  • Step 4: Create a CNN to Classify Dog Breeds (using Transfer Learning)
  • Step 5: Write your Algorithm
  • Step 6: Test Your Algorithm

Step 0: Import Datasets

Make sure that you've downloaded the required human and dog datasets:

  • Download the dog dataset. Unzip the folder and place it in this project's home directory, at the location /dogImages.

  • Download the human dataset. Unzip the folder and place it in the home directory, at location /lfw.

Note: If you are using a Windows machine, you are encouraged to use 7zip to extract the folder.

In the code cell below, we save the file paths for both the human (LFW) dataset and dog dataset in the numpy arrays human_files and dog_files.

In [60]:
import cv2 
import gc 
import random
import torch
import torchvision 
from torch.optim import lr_scheduler
import matplotlib.pyplot as plt
%matplotlib inline
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
    print('CUDA is not available.  Training on CPU ...')
else:
    print('CUDA is available!  Training on GPU ...')

print('torch: ', torch.__version__, ', torchvision: ', torchvision.__version__)
CUDA is available!  Training on GPU ...
torch:  1.5.0+cu101 , torchvision:  0.6.0+cu101
In [2]:
import numpy as np
from glob import glob

local_path = '/home/philip/Shares/local/philip/ML_Data/Udacity/DeepLearning/dogbreed/'
# load filenames for human and dog images
human_files = np.array(glob(local_path+"lfw/*/*/*"))
dog_files = np.array(glob(local_path+"dogImages/*/*/*"))

# print number of images in each dataset
print('There are %d total human images.' % len(human_files))
print('There are %d total dog images.' % len(dog_files))
There are 18982 total human images.
There are 8351 total dog images.

Step 1: Detect Humans

In this section, we use OpenCV's implementation of Haar feature-based cascade classifiers to detect human faces in images.

OpenCV provides many pre-trained face detectors, stored as XML files on github. We have downloaded one of these detectors and stored it in the haarcascades directory. In the next code cell, we demonstrate how to use this detector to find human faces in a sample image.

In [3]:
              
# extract pre-trained face detector
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')

# load color (BGR) image
img = cv2.imread(human_files[0])
# convert BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# find faces in image
faces = face_cascade.detectMultiScale(gray)

# print number of faces detected in the image
print('Number of faces detected:', len(faces))

# get bounding box for each detected face
for (x,y,w,h) in faces:
    # add bounding box to color image
    cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
    
# convert BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# display the image, along with bounding box
plt.imshow(cv_rgb)
plt.show()
Number of faces detected: 1

Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The detectMultiScale function executes the classifier stored in face_cascade and takes the grayscale image as a parameter.

In the above code, faces is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as x and y) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as w and h) specify the width and height of the box.
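
As a quick illustration of this convention, the same box can be used to crop the detected face region directly. A minimal sketch, reusing the img and faces variables from the cell above:

In [ ]:
# crop each detected face out of the image; note that numpy indexes
# rows (the vertical coordinate) first, then columns
for (x, y, w, h) in faces:
    face_crop = img[y:y+h, x:x+w]
    print('face crop shape (height, width, channels):', face_crop.shape)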

Write a Human Face Detector

We can use this procedure to write a function that returns True if a human face is detected in an image and False otherwise. This function, aptly named face_detector, takes a string-valued file path to an image as input and appears in the code block below.

In [4]:
# returns "True" if face is detected in image stored at img_path
def face_detector(img_path):
    img = cv2.imread(img_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray)
    return len(faces) > 0

(IMPLEMENTATION) Assess the Human Face Detector

Question 1: Use the code cell below to test the performance of the face_detector function.

  • What percentage of the first 100 images in human_files have a detected human face?
  • What percentage of the first 100 images in dog_files have a detected human face?

Ideally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in the numpy arrays human_files_short and dog_files_short.

In [ ]:
from tqdm import tqdm

human_files_short = human_files[:100]
dog_files_short = dog_files[:100]

Answer: 98% of the first 100 images in human_files and 16% of the first 100 images in dog_files have a detected human face (see the cell below).

In [8]:
%%time

## TODO: Test the performance of the face_detector algorithm 
## on the images in human_files_short and dog_files_short.

# percentage of the first 100 images in human_files with a detected human face
detected_hfaces_h = [face_detector(img_path) for img_path in human_files_short]
human_perc_h = sum(detected_hfaces_h)/100

# percentage of the first 100 images in dog_files with a detected human face
detected_hfaces_d = [face_detector(img_path) for img_path in dog_files_short]
human_perc_d = sum(detected_hfaces_d)/100

print('+ Percentage of the first 100 images in human_files with a detected human face: ', human_perc_h)
print('+ Percentage of the first 100 images in dog_files with a detected human face: ', human_perc_d)
+ Percentage of the first 100 images in human_files with a detected human face:  0.98
+ Percentage of the first 100 images in dog_files with a detected human face:  0.16
CPU times: user 57.6 s, sys: 95.1 ms, total: 57.7 s
Wall time: 9.78 s

We suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this optional task, report performance on human_files_short and dog_files_short.
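
As one possible starting point for this optional task, the sketch below swaps in the MTCNN detector from the third-party facenet-pytorch package (an assumption: this package is not part of the project and would need to be installed separately):

In [ ]:
## (Optional) a deep-learning face detector -- a sketch assuming the
## third-party facenet-pytorch package is installed (pip install facenet-pytorch)
from facenet_pytorch import MTCNN
from PIL import Image

mtcnn = MTCNN(keep_all=True, device='cuda' if train_on_gpu else 'cpu')

def face_detector_dl(img_path):
    # MTCNN expects an RGB PIL image; detect() returns (boxes, probabilities),
    # with boxes set to None when no face is found
    img = Image.open(img_path).convert('RGB')
    boxes, _ = mtcnn.detect(img)
    return boxes is not None

# report performance on the same short file lists as above, e.g.:
# print(sum(face_detector_dl(f) for f in human_files_short) / 100)
# print(sum(face_detector_dl(f) for f in dog_files_short) / 100)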


Step 2: Detect Dogs

In this section, we use a pre-trained model to detect dogs in images.

Obtain Pre-trained VGG-16 Model

The code cell below downloads the VGG-16 model, along with weights that have been trained on ImageNet, a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of 1000 categories.

In [10]:
import torch
import torchvision.models as models

# define VGG16 model
VGG16 = models.vgg16(pretrained=True)

# move model to GPU if CUDA is available
if train_on_gpu:
    VGG16 = VGG16.cuda()
In [11]:
VGG16
Out[11]:
VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace=True)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace=True)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace=True)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace=True)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace=True)
    (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): ReLU(inplace=True)
    (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU(inplace=True)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU(inplace=True)
    (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (25): ReLU(inplace=True)
    (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (27): ReLU(inplace=True)
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU(inplace=True)
    (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU(inplace=True)
    (2): Dropout(p=0.5, inplace=False)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU(inplace=True)
    (5): Dropout(p=0.5, inplace=False)
    (6): Linear(in_features=4096, out_features=1000, bias=True)
  )
)

Given an image, this pre-trained VGG-16 model returns a prediction (derived from the 1000 possible categories in ImageNet) for the object that is contained in the image.

In [12]:
# Let's test a dog file
BGR_dog_img = cv2.imread(dog_files[0])
dog_img = cv2.cvtColor(BGR_dog_img, cv2.COLOR_BGR2RGB)
print(dog_img.shape)
plt.imshow(dog_img)
plt.show()
(688, 1024, 3)

(IMPLEMENTATION) Making Predictions with a Pre-trained Model

In the next code cell, you will write a function that accepts a path to an image (such as 'dogImages/train/001.Affenpinscher/Affenpinscher_00001.jpg') as input and returns the index corresponding to the ImageNet class that is predicted by the pre-trained VGG-16 model. The output should always be an integer between 0 and 999, inclusive.

Before writing the function, make sure that you take the time to learn how to appropriately pre-process tensors for pre-trained models in the PyTorch documentation.

In [13]:
from PIL import Image
import torchvision.transforms as transforms

# Set PIL to be tolerant of image files that are truncated.
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True

# deterministic preprocessing for inference: resize, center-crop to 224x224,
# and normalize with the ImageNet statistics (no random augmentation here)
normalize = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize(255),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])])

def VGG16_predict(img_path):
    '''
    Use pre-trained VGG-16 model to obtain index corresponding to 
    predicted ImageNet class for image at specified path
    
    Args:
        img_path: path to an image
        
    Returns:
        Index corresponding to VGG-16 model's prediction
    '''
    
    ## TODO: Complete the function.
    ## Load and pre-process an image from the given img_path
    BGR_img = cv2.imread(img_path)
    RGB_img = cv2.cvtColor(BGR_img, cv2.COLOR_BGR2RGB)

    RGB_img = normalize(RGB_img)

    # put the model in inference mode so dropout is disabled
    VGG16.eval()

    # move model inputs to cuda, if GPU available
    if train_on_gpu:
        RGB_img = RGB_img.cuda()

    ## Return the *index* of the predicted class for that image
    # get sample outputs
    with torch.no_grad():
        output = VGG16(RGB_img.unsqueeze(0))
        torch.cuda.empty_cache()
    # convert output probabilities to predicted class
    _, preds_tensor = torch.max(output, 1)
    pred = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())

    gc.collect()
    
    return pred # predicted class index
In [14]:
img_path = dog_files[276]
pred = VGG16_predict(img_path)
print(img_path)

BGR_img = cv2.imread(img_path)
RGB_img = cv2.cvtColor(BGR_img, cv2.COLOR_BGR2RGB)

plt.imshow(RGB_img)
plt.show()
print(pred)
/home/philip/Shares/local/philip/ML_Data/Udacity/DeepLearning/dogbreed/dogImages/valid/071.German_shepherd_dog/German_shepherd_dog_04897.jpg
235
In [15]:
import urllib.request, json 

imagenet_class_url = "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json"

with urllib.request.urlopen(imagenet_class_url) as url:
    class_idx = json.loads(url.read().decode())
    idx2label = [class_idx[str(k)][1] for k in range(len(class_idx))]
    print(idx2label[pred])
German_shepherd

(IMPLEMENTATION) Write a Dog Detector

While looking at the dictionary, you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, to include all categories from 'Chihuahua' to 'Mexican hairless'. Thus, in order to check to see if an image is predicted to contain a dog by the pre-trained VGG-16 model, we need only check if the pre-trained model predicts an index between 151 and 268 (inclusive).

Use these ideas to complete the dog_detector function below, which returns True if a dog is detected in an image (and False if not).

In [16]:
### returns "True" if a dog is detected in the image stored at img_path
def dog_detector(img_path):
    ## TODO: Complete the function.
    idx = VGG16_predict(img_path)
    return 151 <= idx <= 268

(IMPLEMENTATION) Assess the Dog Detector

Question 2: Use the code cell below to test the performance of your dog_detector function.

  • What percentage of the images in human_files_short have a detected dog?
  • What percentage of the images in dog_files_short have a detected dog?

Answer: 0% of the images in human_files_short and 99% of the images in dog_files_short have a detected dog (see the cell below).

In [17]:
%%time 

### TODO: Test the performance of the dog_detector function
### on the images in human_files_short and dog_files_short.

# percentage of the first 100 images in human_files with a detected dog
detected_dogs_h = 0
for img_path in human_files_short:
    detected_dogs_h += dog_detector(img_path)
dog_perc_h = detected_dogs_h/100

# percentage of the first 100 images in dog_files with a detected dog
detected_dogs_d = 0
for img_path in dog_files_short:
    detected_dogs_d += dog_detector(img_path)
dog_perc_d = detected_dogs_d/100

print('+ Percentage of the first 100 images in human_files with a detected dog: ', dog_perc_h)
print('+ Percentage of the first 100 images in dog_files with a detected dog: ', dog_perc_d)
+ Percentage of the first 100 images in human_files with a detected dog:  0.0
+ Percentage of the first 100 images in dog_files with a detected dog:  0.99
CPU times: user 18.3 s, sys: 1.5 s, total: 19.8 s
Wall time: 10.9 s

We suggest VGG-16 as a potential network to detect dog images in your algorithm, but you are free to explore other pre-trained networks (such as Inception-v3, ResNet-50, etc). Please use the code cell below to test other pre-trained PyTorch models. If you decide to pursue this optional task, report performance on human_files_short and dog_files_short.
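
As a sketch of this optional task, the same index-range check works with any backbone trained on the ImageNet label space; here with torchvision's ResNet-50, reusing the deterministic normalize transform defined above:

In [ ]:
## (Optional) dog detector backed by ResNet-50 -- a sketch; ImageNet indices
## 151-268 cover the dog classes for any model trained on the same 1000 labels
resnet50 = models.resnet50(pretrained=True).eval()
if train_on_gpu:
    resnet50 = resnet50.cuda()

def dog_detector_resnet50(img_path):
    img = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB)
    tensor = normalize(img).unsqueeze(0)
    if train_on_gpu:
        tensor = tensor.cuda()
    with torch.no_grad():
        idx = resnet50(tensor).argmax(dim=1).item()
    return 151 <= idx <= 268

# report performance on human_files_short and dog_files_short, e.g.:
# print(sum(dog_detector_resnet50(f) for f in dog_files_short) / 100)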


Step 3: Create a CNN to Classify Dog Breeds (from Scratch)

Now that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create your CNN from scratch (so, you can't use transfer learning yet!), and you must attain a test accuracy of at least 10%. In Step 4 of this notebook, you will have the opportunity to use transfer learning to create a CNN that attains greatly improved accuracy.

We mention that the task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that even a human would have trouble distinguishing between a Brittany and a Welsh Springer Spaniel.

Brittany Welsh Springer Spaniel

It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels).

Curly-Coated Retriever American Water Spaniel

Likewise, recall that labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed.

Yellow Labrador Chocolate Labrador Black Labrador

We also mention that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imbalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%.

Remember that the practice is far ahead of the theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun!

(IMPLEMENTATION) Specify Data Loaders for the Dog Dataset

Use the code cell below to write three separate data loaders for the training, validation, and test datasets of dog images (located at dogImages/train, dogImages/valid, and dogImages/test, respectively). You may find this documentation on custom datasets to be a useful resource. If you are interested in augmenting your training and/or validation data, check out the wide variety of transforms!

In [18]:
import os
from torchvision import datasets

# Define transforms for the training data and testing data
data_transforms = {
      'train': transforms.Compose([
                  transforms.RandomRotation(30),
                  transforms.RandomResizedCrop(224),
                  transforms.RandomHorizontalFlip(),
                  transforms.ToTensor(),
                  transforms.Normalize(mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225])]),


      'test': transforms.Compose([
                  transforms.Resize(255),
                  transforms.CenterCrop(224),
                  transforms.ToTensor(),
                  transforms.Normalize(mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225])])
}

# note: reusing the augmented training transform for validation is a deliberate
# choice here; the deterministic 'test' transform is the more common default
data_transforms['valid'] = data_transforms['train']

# set up the image folders
data_dir = local_path + 'dogImages/'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), 
                        data_transforms[x])
                  for x in ['train', 'valid', 'test']}

# prepare data loaders (combine dataset and sampler)
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=32,
                                             shuffle=True, num_workers=6)
              for x in ['train', 'valid', 'test']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'valid', 'test']}
class_names = image_datasets['train'].classes
print(dataset_sizes, '& Number of class names:', len(class_names))
{'train': 6680, 'valid': 835, 'test': 836} & Number of class names: 133

Question 3: Describe your chosen procedure for preprocessing the data.

  • How does your code resize the images (by cropping, stretching, etc)? What size did you pick for the input tensor, and why?
  • Did you decide to augment the dataset? If so, how (through translations, flips, rotations, etc)? If not, why not?

Answer:

  • Since we will later use models that follow the ImageNet convention (such as VGG16), the input tensor is set to (3, 224, 224), with each batch containing 32 images. The Resize and CenterCrop transformations are therefore applied to all of the images.

  • For the training data, we also apply augmentation to increase the effective amount of data; hence a combination of RandomRotation, RandomResizedCrop, and RandomHorizontalFlip is used.

In [19]:
def imshow(inp, title=None):
    """Imshow for Tensor."""
    inp = inp.transpose((1, 2, 0))
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    inp = std * inp + mean
    inp = np.clip(inp, 0, 1)
    plt.imshow(inp)
    if title is not None:
        plt.title(title)
    plt.pause(0.001)  # pause a bit so that plots are updated
In [20]:
# obtain one batch of training images and visualise some images
dataiter = iter(dataloaders['test'])
images, labels = next(dataiter)
images = images.numpy() # convert images to numpy for display
print(images.shape)

# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
    imshow(images[idx])
    ax.set_title(class_names[labels[idx]])
(32, 3, 224, 224)

(IMPLEMENTATION) Model Architecture

Create a CNN to classify dog breed. Use the template in the code cell below.

In [36]:
import torch.nn as nn
import torch.nn.functional as F

# define the CNN architecture
class Net(nn.Module):
    ### TODO: choose an architecture, and complete the class
    def __init__(self):
        super(Net, self).__init__()
        ## Define layers of a CNN
        self.conv1 = nn.Conv2d(3, 8, 5, padding=2)      
        self.conv2 = nn.Conv2d(8, 16, 3, padding=1, stride=2)
        self.conv3 = nn.Conv2d(16, 64, 3, padding=1)
        self.pool = nn.MaxPool2d(2,2)

        self.fc1 = nn.Linear(64 * 28 * 28, 1000)
        self.fc2 = nn.Linear(1000, len(class_names))

        self.dropout = nn.Dropout(p=0.2)


    
    def forward(self, x):
        ## Define forward behavior
        x = self.pool(F.relu(self.conv1(x)))
        x = F.relu(self.conv2(x))
        x = self.pool(F.relu(self.conv3(x)))
        x = self.dropout(x)
#         print(x.shape)
        # flatten image input
        x = x.view(-1, 64 * 28 * 28)
        # add dropout layer
        x = self.dropout(x)
        # add 1st hidden layer, with relu activation function
        x = F.relu(self.fc1(x))
        # add dropout layer
        x = self.dropout(x)
        # add 2nd hidden layer, with relu activation function
        x = self.fc2(x)
        return x

#-#-# You do NOT have to modify the code below this line. #-#-#

# instantiate the CNN
model_scratch = Net()

# move tensors to GPU if CUDA is available
if train_on_gpu:
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    print(device)
    model_scratch.to(device)

model_scratch
cuda
Out[36]:
Net(
  (conv1): Conv2d(3, 8, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2))
  (conv2): Conv2d(8, 16, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
  (conv3): Conv2d(16, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (fc1): Linear(in_features=50176, out_features=1000, bias=True)
  (fc2): Linear(in_features=1000, out_features=133, bias=True)
  (dropout): Dropout(p=0.2, inplace=False)
)

Question 4: Outline the steps you took to get to your final CNN architecture and your reasoning at each step.

Answer:

After testing several CNN architectures, we arrived at the current one, since we do not have a large amount of data.

There are three Conv2d layers, with max pooling applied after the first and third convolutions. We also apply Dropout with p=0.2 to prevent overfitting. After flattening, two Linear layers map the features down to an output of length len(class_names) = 133.
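
As a sanity check on the 64 * 28 * 28 size used in fc1, a dummy input can be traced through the convolutional stack (224 -> 112 after conv1 + pool, -> 56 after the stride-2 conv2, -> 28 after conv3 + pool):

In [ ]:
# trace a dummy batch through the convolutional layers to confirm the
# flattened feature size expected by fc1 (64 * 28 * 28 = 50176)
with torch.no_grad():
    net = Net()
    x = torch.randn(1, 3, 224, 224)
    x = net.pool(F.relu(net.conv1(x)))   # -> (1, 8, 112, 112)
    x = F.relu(net.conv2(x))             # -> (1, 16, 56, 56)
    x = net.pool(F.relu(net.conv3(x)))   # -> (1, 64, 28, 28)
    print(x.shape)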

(IMPLEMENTATION) Specify Loss Function and Optimizer

Use the next code cell to specify a loss function and optimizer. Save the chosen loss function as criterion_scratch, and the optimizer as optimizer_scratch below.

In [37]:
import torch.optim as optim

### TODO: select loss function
criterion_scratch = nn.CrossEntropyLoss()

### TODO: select optimizer
#optimizer_scratch = optim.Adam(model_scratch.parameters(), lr=0.01)
optimizer_scratch = optim.SGD(model_scratch.parameters(), lr=0.01, momentum=0.9)

# Decay LR by a factor of 0.3 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_scratch, step_size=7, gamma=0.3)
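
Note that the train() function below never calls exp_lr_scheduler.step(), so the decay has no effect as written. A minimal sketch (with a throwaway optimizer, so no training state is touched) of how StepLR would be stepped once per epoch:

In [ ]:
## sketch: StepLR only decays the LR if step() is called once per epoch
demo_opt = optim.SGD(model_scratch.parameters(), lr=0.01)
demo_sched = lr_scheduler.StepLR(demo_opt, step_size=7, gamma=0.3)
for epoch in range(14):
    demo_opt.step()    # the epoch's parameter updates would happen here
    demo_sched.step()  # decay the LR every 7 epochs
print(demo_opt.param_groups[0]['lr'])  # 0.01 * 0.3**2 after 14 epochs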

(IMPLEMENTATION) Train and Validate the Model

Train and validate your model in the code cell below. Save the final model parameters at filepath 'model_scratch.pt'.

In [38]:
# the following import is required for training to be robust to truncated images
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True

def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
    """returns trained model"""
    # initialize tracker for minimum validation loss
    valid_loss_min = np.Inf 
    
    for epoch in range(1, n_epochs+1):
        # initialize variables to monitor training and validation loss
        train_loss = 0.0
        valid_loss = 0.0
        
        ###################
        # train the model #
        ###################
        model.train()
        for batch_idx, (data, target) in enumerate(loaders['train']):
            # move to GPU
            if use_cuda:
                data, target = data.cuda(), target.cuda()
                # print(data.shape, target.shape)
            optimizer.zero_grad()
            ## find the loss and update the model parameters accordingly
            output = model(data)
            # calculate the batch loss
            loss = criterion(output, target)
            # backward pass: compute gradient of the loss with respect to model parameters
            loss.backward()
            # perform a single optimization step (parameter update)
            optimizer.step()

            ## record the average training loss,
            train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
            
        ######################    
        # validate the model #
        ######################
        model.eval()
        for batch_idx, (data, target) in enumerate(loaders['valid']):
            # move to GPU
            if use_cuda:
                data, target = data.cuda(), target.cuda()

            output = model(data)
            # calculate the batch loss
            loss = criterion(output, target)

            ## update the average validation loss
            valid_loss = valid_loss + ((1 / (batch_idx + 1)) * (loss.data - valid_loss))

        # save model if validation loss has decreased
        if valid_loss <= valid_loss_min:
            print('Validation loss decreased ({:.6f} --> {:.6f}).  Saving model ...'.format(
            valid_loss_min,
            valid_loss))
            torch.save(model.state_dict(), save_path)
            valid_loss_min = valid_loss


        # print training/validation statistics 
        print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
            epoch, 
            train_loss,
            valid_loss
            ))
        gc.collect()
        torch.cuda.empty_cache()
        
            
    # return trained model
    return model
In [39]:
# train the model
model_scratch = train(50, dataloaders, model_scratch, optimizer_scratch, 
                      criterion_scratch, train_on_gpu, 'model_scratch.pt')
Validation loss decreased (inf --> 4.826581).  Saving model ...
Epoch: 1 	Training Loss: 4.864693 	Validation Loss: 4.826581
Validation loss decreased (4.826581 --> 4.740669).  Saving model ...
Epoch: 2 	Training Loss: 4.737045 	Validation Loss: 4.740669
Validation loss decreased (4.740669 --> 4.622540).  Saving model ...
Epoch: 3 	Training Loss: 4.630765 	Validation Loss: 4.622540
Validation loss decreased (4.622540 --> 4.549173).  Saving model ...
Epoch: 4 	Training Loss: 4.590758 	Validation Loss: 4.549173
Epoch: 5 	Training Loss: 4.539237 	Validation Loss: 4.576294
Validation loss decreased (4.549173 --> 4.512720).  Saving model ...
Epoch: 6 	Training Loss: 4.497324 	Validation Loss: 4.512720
Validation loss decreased (4.512720 --> 4.492928).  Saving model ...
Epoch: 7 	Training Loss: 4.439075 	Validation Loss: 4.492928
Validation loss decreased (4.492928 --> 4.386450).  Saving model ...
Epoch: 8 	Training Loss: 4.398353 	Validation Loss: 4.386450
Validation loss decreased (4.386450 --> 4.328633).  Saving model ...
Epoch: 9 	Training Loss: 4.344254 	Validation Loss: 4.328633
Validation loss decreased (4.328633 --> 4.310965).  Saving model ...
Epoch: 10 	Training Loss: 4.311636 	Validation Loss: 4.310965
Validation loss decreased (4.310965 --> 4.244311).  Saving model ...
Epoch: 11 	Training Loss: 4.270813 	Validation Loss: 4.244311
Epoch: 12 	Training Loss: 4.226561 	Validation Loss: 4.270984
Epoch: 13 	Training Loss: 4.191391 	Validation Loss: 4.248219
Validation loss decreased (4.244311 --> 4.176008).  Saving model ...
Epoch: 14 	Training Loss: 4.172826 	Validation Loss: 4.176008
Validation loss decreased (4.176008 --> 4.119902).  Saving model ...
Epoch: 15 	Training Loss: 4.152637 	Validation Loss: 4.119902
Epoch: 16 	Training Loss: 4.099391 	Validation Loss: 4.173741
Epoch: 17 	Training Loss: 4.072143 	Validation Loss: 4.120141
Epoch: 18 	Training Loss: 4.047945 	Validation Loss: 4.163062
Epoch: 19 	Training Loss: 4.039827 	Validation Loss: 4.202817
Validation loss decreased (4.119902 --> 4.101514).  Saving model ...
Epoch: 20 	Training Loss: 4.029162 	Validation Loss: 4.101514
Epoch: 21 	Training Loss: 3.966397 	Validation Loss: 4.103434
Epoch: 22 	Training Loss: 3.945113 	Validation Loss: 4.124300
Validation loss decreased (4.101514 --> 4.077306).  Saving model ...
Epoch: 23 	Training Loss: 3.918520 	Validation Loss: 4.077306
Epoch: 24 	Training Loss: 3.911463 	Validation Loss: 4.083992
Validation loss decreased (4.077306 --> 4.035691).  Saving model ...
Epoch: 25 	Training Loss: 3.869854 	Validation Loss: 4.035691
Validation loss decreased (4.035691 --> 4.030469).  Saving model ...
Epoch: 26 	Training Loss: 3.864894 	Validation Loss: 4.030469
Validation loss decreased (4.030469 --> 3.986620).  Saving model ...
Epoch: 27 	Training Loss: 3.849900 	Validation Loss: 3.986620
Epoch: 28 	Training Loss: 3.820456 	Validation Loss: 4.058804
Epoch: 29 	Training Loss: 3.784419 	Validation Loss: 3.995935
Epoch: 30 	Training Loss: 3.792843 	Validation Loss: 4.053333
Epoch: 31 	Training Loss: 3.767396 	Validation Loss: 4.030779
Epoch: 32 	Training Loss: 3.766980 	Validation Loss: 3.987423
Epoch: 33 	Training Loss: 3.693246 	Validation Loss: 4.123504
Epoch: 34 	Training Loss: 3.707134 	Validation Loss: 3.988781
Validation loss decreased (3.986620 --> 3.972089).  Saving model ...
Epoch: 35 	Training Loss: 3.710020 	Validation Loss: 3.972089
Validation loss decreased (3.972089 --> 3.961196).  Saving model ...
Epoch: 36 	Training Loss: 3.704571 	Validation Loss: 3.961196
Epoch: 37 	Training Loss: 3.727495 	Validation Loss: 4.054806
Epoch: 38 	Training Loss: 3.630757 	Validation Loss: 4.022011
Validation loss decreased (3.961196 --> 3.867537).  Saving model ...
Epoch: 39 	Training Loss: 3.647807 	Validation Loss: 3.867537
Epoch: 40 	Training Loss: 3.620328 	Validation Loss: 3.932944
Epoch: 41 	Training Loss: 3.631576 	Validation Loss: 3.916930
Epoch: 42 	Training Loss: 3.627102 	Validation Loss: 4.090998
Epoch: 43 	Training Loss: 3.634383 	Validation Loss: 3.970145
Epoch: 44 	Training Loss: 3.590119 	Validation Loss: 3.938694
Epoch: 45 	Training Loss: 3.576680 	Validation Loss: 4.049429
Epoch: 46 	Training Loss: 3.608443 	Validation Loss: 3.872584
Epoch: 47 	Training Loss: 3.564305 	Validation Loss: 3.980879
Epoch: 48 	Training Loss: 3.554341 	Validation Loss: 4.024620
Epoch: 49 	Training Loss: 3.519001 	Validation Loss: 4.055076
Epoch: 50 	Training Loss: 3.578323 	Validation Loss: 3.902217

(IMPLEMENTATION) Test the Model

Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 10%.

In [63]:
def test(loaders, model, criterion, use_cuda):

    # monitor test loss and accuracy
    test_loss = 0.
    correct = 0.
    total = 0.

    model.eval()
    for batch_idx, (data, target) in enumerate(loaders['test']):
        # move to GPU
        if use_cuda:
            data, target = data.cuda(), target.cuda()
        # forward pass: compute predicted outputs by passing inputs to the model
        output = model(data)
        # calculate the loss
        loss = criterion(output, target)
        # update average test loss 
        test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.data - test_loss))
        # convert output probabilities to predicted class
        pred = output.data.max(1, keepdim=True)[1]
        # compare predictions to true label
        correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy())
        total += data.size(0)
            
    print('Test Loss: {:.6f}\n'.format(test_loss))

    print(f'Test Accuracy: {100. * correct / total}')
In [65]:
# load the model that got the best validation accuracy
model_scratch.load_state_dict(torch.load('model_scratch.pt'))
Out[65]:
<All keys matched successfully>
In [66]:
# call test function    
test(dataloaders, model_scratch, criterion_scratch, train_on_gpu)
Test Loss: 3.578446

Test Accuracy: 15.909090909090908

Step 4: Create a CNN to Classify Dog Breeds (using Transfer Learning)

You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set.

(IMPLEMENTATION) Specify Data Loaders for the Dog Dataset

Use the code cell below to write three separate data loaders for the training, validation, and test datasets of dog images (located at dogImages/train, dogImages/valid, and dogImages/test, respectively).

If you like, you are welcome to use the same data loaders from the previous step, when you created a CNN from scratch.
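
In this notebook we simply reuse the dataloaders dictionary (and the image_datasets folders) defined in Step 3 for the transfer-learning model as well.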

(IMPLEMENTATION) Model Architecture

Use transfer learning to create a CNN to classify dog breed. Use the code cell below, and save your initialized model as the variable model_transfer.

In [42]:
import torchvision.models as models
import torch.nn as nn

## TODO: Specify model architecture 
model_transfer = models.resnet18(pretrained=True)

# print out the model structure
print(model_transfer)
ResNet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer2): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer3): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer4): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Linear(in_features=512, out_features=1000, bias=True)
)
In [43]:
# Freeze training for all "features" layers
for param in model_transfer.parameters():
    param.requires_grad = False
In [44]:
n_inputs = model_transfer.fc.in_features

# add last linear layer (n_inputs -> 133 dog breed classes)
# new layers automatically have requires_grad = True
last_layer = nn.Linear(n_inputs, len(class_names))

model_transfer.fc = last_layer

# if GPU is available, move the model to GPU
if train_on_gpu:
    model_transfer = model_transfer.cuda()

# check to see that your last layer produces the expected number of outputs
print(model_transfer.fc.out_features)
133

Question 5: Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem.

Answer:

  • We use the pretrained resnet18 model for our transfer learning task. requires_grad is turned off for the backbone layers to reduce the amount of computation. A new Linear layer then maps the final features to an output of length len(class_names) = 133.

  • Since the resnet18 architecture is quite powerful and already detects most of the characteristics of the objects in ImageNet, we only need to add and train the last layer on top to obtain a good dog breed model.
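
A quick check that the freeze-and-replace steps behaved as intended is to count the parameters that still require gradients; only the new fc layer (512 x 133 weights plus 133 biases) should remain trainable:

In [ ]:
# confirm that only the replacement fc layer is trainable
trainable = sum(p.numel() for p in model_transfer.parameters() if p.requires_grad)
total = sum(p.numel() for p in model_transfer.parameters())
print(f'trainable parameters: {trainable:,} of {total:,}')  # 68,229 trainable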

(IMPLEMENTATION) Specify Loss Function and Optimizer

Use the next code cell to specify a loss function and optimizer. Save the chosen loss function as criterion_transfer, and the optimizer as optimizer_transfer below.

In [45]:
criterion_transfer = nn.CrossEntropyLoss()

# only the new fc layer has requires_grad=True, so it is the only part updated
optimizer_transfer = optim.SGD(model_transfer.parameters(), lr=0.001, momentum=0.9)

# Decay LR by a factor of 0.1 every 10 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_transfer, step_size=10, gamma=0.1)

(IMPLEMENTATION) Train and Validate the Model

Train and validate your model in the code cell below. Save the final model parameters at filepath 'model_transfer.pt'.

In [46]:
# train the model
model_transfer = train(10, dataloaders, model_transfer, optimizer_transfer,
                       criterion_transfer, train_on_gpu, 'model_transfer.pt')
Validation loss decreased (inf --> 3.970656).  Saving model ...
Epoch: 1 	Training Loss: 4.518369 	Validation Loss: 3.970656
Validation loss decreased (3.970656 --> 3.192210).  Saving model ...
Epoch: 2 	Training Loss: 3.637560 	Validation Loss: 3.192210
Validation loss decreased (3.192210 --> 2.725826).  Saving model ...
Epoch: 3 	Training Loss: 3.028157 	Validation Loss: 2.725826
Validation loss decreased (2.725826 --> 2.363376).  Saving model ...
Epoch: 4 	Training Loss: 2.624784 	Validation Loss: 2.363376
Validation loss decreased (2.363376 --> 2.174071).  Saving model ...
Epoch: 5 	Training Loss: 2.305958 	Validation Loss: 2.174071
Validation loss decreased (2.174071 --> 1.921962).  Saving model ...
Epoch: 6 	Training Loss: 2.088099 	Validation Loss: 1.921962
Validation loss decreased (1.921962 --> 1.833131).  Saving model ...
Epoch: 7 	Training Loss: 1.959951 	Validation Loss: 1.833131
Validation loss decreased (1.833131 --> 1.693688).  Saving model ...
Epoch: 8 	Training Loss: 1.820197 	Validation Loss: 1.693688
Validation loss decreased (1.693688 --> 1.556619).  Saving model ...
Epoch: 9 	Training Loss: 1.722679 	Validation Loss: 1.556619
Epoch: 10 	Training Loss: 1.644215 	Validation Loss: 1.576800

(IMPLEMENTATION) Test the Model

Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 60%.

In [47]:
# load the model that got the best validation accuracy
model_transfer.load_state_dict(torch.load('model_transfer.pt'))

test(dataloaders, model_transfer, criterion_transfer, train_on_gpu)
Test Loss: 1.065252


Test Accuracy: 77% (648/836)

(IMPLEMENTATION) Predict Dog Breed with the Model

Write a function that takes an image path as input and returns the dog breed (Affenpinscher, Afghan hound, etc) that is predicted by your model.

In [48]:
# list of class names by index, i.e. a name can be accessed like class_names[0]
class_names = [item[4:].replace("_", " ") for item in image_datasets['train'].classes]

# re-point idx2label (previously the ImageNet labels) at the breed names
idx2label = class_names
In [49]:
### TODO: Write a function that takes a path to an image as input
### and returns the dog breed that is predicted by the model.


def predict_breed_transfer(img_path):
    # load the image and return the predicted breed
    ## TODO: Complete the function.
    ## Load and pre-process an image from the given img_path
    BGR_img = cv2.imread(img_path)
    RGB_img = cv2.cvtColor(BGR_img, cv2.COLOR_BGR2RGB)
    
    RGB_img = normalize(RGB_img)

    # make sure dropout/batch-norm layers are in inference mode
    model_transfer.eval()

    # move model inputs to cuda, if GPU available
    if train_on_gpu:
        RGB_img = RGB_img.cuda()

    ## Return the *index* of the predicted class for that image
    # get sample outputs
    with torch.no_grad():
        output = model_transfer(RGB_img.unsqueeze(0))
        torch.cuda.empty_cache()
    # convert output probabilities to predicted class
    _, preds_tensor = torch.max(output, 1)
    pred = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())

    gc.collect()
    
    return idx2label[pred] # predicted breed name
In [51]:
print(dog_files[276])
predict_breed_transfer(dog_files[276])
/home/philip/Shares/local/philip/ML_Data/Udacity/DeepLearning/dogbreed/dogImages/valid/071.German_shepherd_dog/German_shepherd_dog_04897.jpg
Out[51]:
'German shepherd dog'

Step 5: Write your Algorithm

Write an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then,

  • if a dog is detected in the image, return the predicted breed.
  • if a human is detected in the image, return the resembling dog breed.
  • if neither is detected in the image, provide output that indicates an error.

You are welcome to write your own functions for detecting humans and dogs in images, but feel free to use the face_detector and dog_detector functions developed above. You are required to use your CNN from Step 4 to predict dog breed.

Some sample output for our algorithm is provided below, but feel free to design your own user experience!

Sample Human Output

(IMPLEMENTATION) Write your Algorithm

In [52]:
### TODO: Write your algorithm.
### Feel free to use as many code cells as needed.

def run_app(img_path):
    ## handle cases for a human face, dog, and neither
    if dog_detector(img_path):
        print(f"It's a dog of type {predict_breed_transfer(img_path)}")
    elif face_detector(img_path):
        print(f"That's a human, but looks like a {predict_breed_transfer(img_path)}")
    else:
        print('Oops! Neither a dog nor a human was detected.')

Step 6: Test Your Algorithm

In this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think that you look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog?

(IMPLEMENTATION) Test Your Algorithm on Sample Images!

Test your algorithm on at least six images on your computer. Feel free to use any images you like. Use at least two human and two dog images.

Question 6: Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm.

Answer: (Three possible points for improvement)

At the moment, we apply the face_cascade detector for human detection and the transfer learning model above for dog detection, and the results already perform quite well. However, there are several ways the algorithm could be improved:

  • Build a better human face detector, e.g. a CNN with higher accuracy than the Haar cascade.
  • Train a single model with three classes (human face, dog, other) to streamline the whole process, instead of combining different models.
  • Add more data to the dog and human datasets, and tune the hyper-parameters of these models with different architectures to improve the output.
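
Beyond the points above, one low-cost refinement would be to surface the model's confidence rather than a single label. A sketch returning the top-3 breeds with softmax probabilities, reusing the preprocessing from predict_breed_transfer:

In [ ]:
## sketch: top-3 breed guesses with softmax probabilities
def predict_breed_topk(img_path, k=3):
    img = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB)
    tensor = normalize(img).unsqueeze(0)
    if train_on_gpu:
        tensor = tensor.cuda()
    with torch.no_grad():
        probs = F.softmax(model_transfer(tensor), dim=1)
    top_p, top_i = probs.topk(k, dim=1)
    return [(idx2label[i.item()], p.item()) for p, i in zip(top_p[0], top_i[0])]

# e.g. predict_breed_topk(dog_files[276])
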
In [59]:
## TODO: Execute your algorithm from Step 6 on
## at least 6 images on your computer.
## Feel free to use as many code cells as needed.

## suggested code, below
for file in np.hstack(([random.choice(human_files) for i in range(3)], [random.choice(dog_files) for i in range(3)])):
    print(f'******** {file} ************')
    BGR_img = cv2.imread(file)
    img = cv2.cvtColor(BGR_img, cv2.COLOR_BGR2RGB)
    plt.imshow(img)
    run_app(file)
    plt.show()
******** /home/philip/Shares/local/philip/ML_Data/Udacity/DeepLearning/dogbreed/lfw/lfw/Bill_Pryor/Bill_Pryor_0001.jpg ************
That's a human, but looks like a Collie
******** /home/philip/Shares/local/philip/ML_Data/Udacity/DeepLearning/dogbreed/lfw/lfw/William_Ford_Jr/William_Ford_Jr_0005.jpg ************
That's a human, but looks like a Dogue de bordeaux
******** /home/philip/Shares/local/philip/ML_Data/Udacity/DeepLearning/dogbreed/lfw/lfw/Kofi_Annan/Kofi_Annan_0020.jpg ************
That's a human, but looks like a Bloodhound
******** /home/philip/Shares/local/philip/ML_Data/Udacity/DeepLearning/dogbreed/dogImages/valid/017.Bearded_collie/Bearded_collie_01222.jpg ************
It's a dog of type Bearded collie
******** /home/philip/Shares/local/philip/ML_Data/Udacity/DeepLearning/dogbreed/dogImages/train/020.Belgian_malinois/Belgian_malinois_01437.jpg ************
It's a dog of type Belgian malinois
******** /home/philip/Shares/local/philip/ML_Data/Udacity/DeepLearning/dogbreed/dogImages/valid/046.Cavalier_king_charles_spaniel/Cavalier_king_charles_spaniel_03273.jpg ************
It's a dog of type Cavalier king charles spaniel